Rethinking proximity in modern chip design

Organizations need to rethink semiconductor proximity and embrace intentional disaggregation to balance latency, energy and cost from die to data center
 
2 min 25 sec read
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
In engineering, and in life, distance carries a cost. In chip design, that cost shows up as latency, energy per bit moved and complexity.

As the industry embraces disaggregation from on-die placement to chiplets, packages, boards and even racks, the design question is no longer “Can we keep everything close?” but “Where does distance make sense, and how do we engineer for it?”

Why proximity still matters

Every integrated circuit balances power, performance and area (PPA). Wire delays grow with distance and longer interconnects leak energy and complicate timing closure. Even when architectural gains suggest spreading logic or memory apart, physics keeps score: nanoseconds accumulate, buffers multiply and verification expands.
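A back-of-envelope sketch shows why distance costs nanoseconds: unbuffered on-chip wire delay follows a distributed-RC (Elmore) model and grows roughly quadratically with length, which is why designers insert repeaters. The resistance and capacitance constants below are illustrative assumptions, not figures from any specific process node.

```python
# Illustrative sketch: unbuffered wire delay grows quadratically with length.
# Constants are assumed order-of-magnitude values, not real process data.

R_PER_MM = 1500.0   # wire resistance, ohms per mm (assumed)
C_PER_MM = 0.2e-12  # wire capacitance, farads per mm (assumed)

def elmore_delay_s(length_mm: float) -> float:
    """Distributed-RC (Elmore) delay of an unbuffered wire: 0.5 * R * C * L^2."""
    return 0.5 * (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

for mm in (1, 2, 4):
    print(f"{mm} mm wire: {elmore_delay_s(mm) * 1e12:.0f} ps")
```

Doubling the wire quadruples the delay under this model, which is why "spreading logic apart" is never free.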

The new design space: disaggregation across scales

Modern systems now choose where to place functionality along several distance tiers:

  • On-die: Placement, floor planning, clocking and NoC topology determine critical path length and congestion.
  • In-package (chiplets): Heterogeneous dies connected over advanced interconnects trade monolithic scale for yield, mix-and-match processes and modularity.
  • On-board / in-system: Coherent and near-memory fabrics let accelerators and CPUs share capacity without monolithic scaling.
  • Across the rack/data center: Low-latency fabrics extend pooling and composability beyond a single motherboard.
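The distance tiers above can be sketched as a budget check: a resource is placed at the farthest tier that still meets its latency and energy-per-bit budgets. The per-tier numbers below are rough orders of magnitude for illustration only, not measurements of any real system.

```python
# Sketch of "intentional distance": place a resource at the farthest tier
# that still meets its budgets. Tier numbers are illustrative assumptions.

TIERS = {
    # tier: (round-trip latency in ns, energy per bit in pJ) -- assumed
    "on-die":      (1,    0.1),
    "in-package":  (5,    0.5),
    "on-board":    (50,   5.0),
    "across-rack": (2000, 30.0),
}

def fits_budget(tier: str, latency_budget_ns: float, energy_budget_pj: float) -> bool:
    """True if placing a resource at `tier` stays within both budgets."""
    latency_ns, energy_pj = TIERS[tier]
    return latency_ns <= latency_budget_ns and energy_pj <= energy_budget_pj

# A latency-critical cache slice must stay close...
print([t for t in TIERS if fits_budget(t, latency_budget_ns=10, energy_budget_pj=1.0)])
# ...while a capacity-tier memory pool tolerates far more distance.
print([t for t in TIERS if fits_budget(t, latency_budget_ns=5000, energy_budget_pj=100.0)])
```

Under these assumed numbers the cache slice qualifies only for on-die or in-package placement, while the capacity pool can live anywhere, including across the rack.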

The strategic shift isn’t near versus far; it’s intentional distance: placing resources exactly far enough to optimize cost, yield, resilience and sustainability, while holding latency and energy within budget.

Redefining proximity

  • Chiplet interconnects (like Universal Chiplet Interconnect Express, or UCIe): UCIe enables chiplets from different vendors to communicate with high bandwidth and low latency, almost as if they were a single, monolithic chip.
  • Coherent system fabrics (like Compute Express Link or CXL): With CXL, organizations can share memory pools across CPUs, GPUs and accelerators — dramatically improving system performance even when resources are distributed.

This is especially relevant for AI and HPC workloads, where real-time data access and efficient memory usage are key to unlocking speed at scale.

  • Low-latency data-center fabrics (Remote Direct Memory Access, or RDMA): In data center design, RDMA allows compute nodes to access memory across the network, bypassing the CPU and dramatically reducing latency.
  • Packaging advances: Higher bump density and shorter vertical hops shrink the effective distance between logic and memory.
  • Emerging directions: Backside power delivery, silicon photonics/optical I/O and wafer-scale integration further blur the line between near and far.

 

HCLTech partners with clients to adopt the solutions highlighted above. The table below summarizes the ROI of such partnerships and investments.

ROI and impact:

  • Faster time-to-market: 20-30% shorter design cycles via cloud-EDA and verified IP reuse
  • Lower development cost: 15-25% reduction through turnkey silicon and verification services
  • Higher system efficiency: 10-20% better performance per watt from optimized CXL architectures
  • CapEx savings for end users: Up to 40% less memory overprovisioning via CXL pooling
  • Future-proof investment: Early entry into the $30B+ CXL ecosystem with Arm and Synopsys alignment
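The overprovisioning saving from pooling has a simple intuition that can be sketched numerically: each server is normally sized for its own peak memory demand, while a CXL-style pool only needs to cover the aggregate peak, which is smaller because individual peaks rarely coincide. The demand figures below are illustrative assumptions, not the source of the 40% figure above.

```python
# Sketch: why memory pooling reduces overprovisioning. Demand numbers
# are illustrative assumptions for a four-server example.

peak_demand_gb = [400, 250, 300, 150]  # per-server peak demand (assumed)
aggregate_peak_gb = 800                # peaks don't all coincide (assumed)

dedicated_gb = sum(peak_demand_gb)     # size every server for its own peak
pooled_gb = aggregate_peak_gb          # size one shared pool for the true peak

savings = 1 - pooled_gb / dedicated_gb
print(f"dedicated: {dedicated_gb} GB, pooled: {pooled_gb} GB, savings: {savings:.0%}")
```

With these assumed numbers the pool needs 800 GB instead of 1,100 GB of dedicated memory, roughly a quarter less; the less correlated the peaks, the larger the saving.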

From proximity to purpose

To reframe the question: it’s not “how far is too far?” but “when does distance earn its keep?”

With chiplets, coherent memory fabrics and low-latency interconnects, proximity becomes a design choice: disaggregate where it improves yield, modularity or cost, and keep close what’s latency- or energy-critical.

Distance in chip design is a decision. The task is to bridge it intelligently so that “far” behaves like “near” where it matters.
